Scaffold DAK AI Skill Library infrastructure with BPMN support#288
Conversation
Create the full directory structure under .github/skills/ with:
- Common module (smart_llm_facade.py, prompts.py, ig_errors.py, fsh_utils.py)
- BPMN author skill (validators, prompts, actions)
- BPMN import skill (validators, prompts, actions)
- IG publisher skill (prompts, actions)
- DAK authoring skill (classify_issue_action.py with keyword lists)
- Stub skills (l1_review, l3_review, translation)
- CLI entry point and skills registry
- Dockerfile and docker-compose.yml
- GitHub Actions workflows (classify-issue, skill-l1/l2/l3/translation, pr-validate-slash)
- .env.example and .gitignore updates
- Label JSON files for content:L2, content:L3, content:translation

Co-authored-by: litlfred <662242+litlfred@users.noreply.github.com>
…ives
- Deduplicate Zeebe/Camunda namespace detection to report once per namespace
- Add word-boundary matching for short keywords (≤3 chars) in classifier to prevent false positives from language codes like 'ar' matching 'pharmacist'

Co-authored-by: litlfred <662242+litlfred@users.noreply.github.com>
- Fix docstring style consistency in classify_issue_action.py
- Clarify _SafeDict docstring in prompts.py
- Remove unnecessary noqa comment in smart_llm_facade.py
- Add comment explaining --break-system-packages in Dockerfile
- Document dak.json check in classify-issue.yml workflow
- Add return type annotation to main() in classify_issue_action.py

Co-authored-by: litlfred <662242+litlfred@users.noreply.github.com>
Replace direct checkout+run of untrusted PR code with safe dispatch pattern (matching pr-deploy-slash.yml). The workflow now dispatches to ghbuild.yml instead of executing scripts from the PR branch in a privileged issue_comment context. Co-authored-by: litlfred <662242+litlfred@users.noreply.github.com>
@copilot
✅ Deployment Successful
Branch:
Deployment completed successfully! The site is now live at the URL above.
Remove the custom SmartLLMFacade class. LiteLLM (20k+ GitHub stars, MIT) is already the trusted external library. The new llm_utils.py provides only DAK-specific env var bridging and JSON-extraction helpers on top of litellm.completion() — no custom facade to maintain.
Rename: smart_llm_facade.py → llm_utils.py
Export: dak_completion(), parse_json_response(), is_llm_available(), get_llm_config()
Update: all 4 callers (classify, dak_authoring, bpmn_author, interpret_errors)

Co-authored-by: litlfred <662242+litlfred@users.noreply.github.com>
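A sketch of what such a JSON-extraction helper can look like (the real `parse_json_response()` in `llm_utils.py` may differ; this is an illustrative reimplementation, not the committed code):

```python
import json
import re


def parse_json_response(text: str) -> dict:
    """Extract the first JSON object from an LLM reply (illustrative sketch).

    Handles replies that wrap JSON in ```json fences or surrounding prose.
    """
    # Prefer a fenced ```json ... ``` block if one is present
    fence = re.search(r"```(?:json)?\s*(\{.*?\})\s*```", text, re.DOTALL)
    candidate = fence.group(1) if fence else text
    # Fall back to the outermost braces in the candidate text
    start, end = candidate.find("{"), candidate.rfind("}")
    if start == -1 or end == -1:
        raise ValueError("no JSON object found in response")
    return json.loads(candidate[start:end + 1])


reply = 'Sure! Here is the classification:\n```json\n{"labels": ["content:L2"]}\n```'
print(parse_json_response(reply))  # {'labels': ['content:L2']}
```

The same helper also copes with prose-wrapped JSON such as `'prefix {"a": 1} suffix'` via the brace-scan fallback.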
Replaced the custom class with direct LiteLLM usage in
Regenerated via IG Publisher and extract_translations.py. Triggered by: Copilot
what is labels/ directory for? do we need it there? also where is the L2 content review skill? provide instructions on values for .env.example (what are the keys for model? where is the master list of the valid models?) @copilot
❌ Deployment Failed
Branch:
Deployment failed. Please check the build log for details.
…arify L2 skill in README
- `labels/` moved from repo root into `.github/skills/labels/` (reference only)
- `.env.example` now documents per-provider key formats, popular model names, and links to the LiteLLM master model list (https://docs.litellm.ai/docs/providers)
- README clarifies that the L2 content review skill is `dak_authoring/actions/dak_authoring_action.py`, triggered by the `content:L2` label via `skill-l2-dak.yml`

Co-authored-by: litlfred <662242+litlfred@users.noreply.github.com>
Regenerated via IG Publisher and extract_translations.py. Triggered by: litlfred
Addressed in
Origin upstream changes to .pot
@copilot see previous |
…lation infrastructure Co-authored-by: litlfred <662242+litlfred@users.noreply.github.com>
Merged upstream main .pot changes in
Pull request overview
This PR scaffolds the DAK AI Skill Library infrastructure for the WHO SMART Base repository, introducing a comprehensive framework for AI-assisted and structural validation tools for authoring WHO Digital Adaptation Kit (DAK) content. It adds BPMN authoring/validation skills, issue classification (keyword + LLM), and IG Publisher integration, all organized under .github/skills/ with corresponding GitHub Actions workflows.
Changes:
- Adds a full skill library under `.github/skills/` with CLI entry point, Docker support, shared utilities (LLM via LiteLLM, FSH helpers, IG errors, prompt templates), and seven skills (bpmn_author, bpmn_import, ig_publisher, dak_authoring, l1_review, l3_review, translation)
- Introduces six GitHub Actions workflows: issue classification (`classify-issue.yml`), four label-triggered skill workflows (`skill-l1-review.yml`, `skill-l2-dak.yml`, `skill-l3-review.yml`, `skill-translation.yml`), and a PR `/validate` slash command (`pr-validate-slash.yml`)
- Adds supporting configuration: `.env.example` for local LLM development, `.gitignore` for `.env` files, label JSON definitions, Docker/docker-compose setup, and skills registry
Reviewed changes
Copilot reviewed 78 out of 88 changed files in this pull request and generated 11 comments.
| File | Description |
|---|---|
| `.github/skills/README.md` | Documentation for the DAK Skill Library |
| `.github/skills/Dockerfile` | Docker image for local development mirroring CI |
| `.github/skills/docker-compose.yml` | Docker Compose service aliases for skill commands |
| `.github/skills/skills_registry.yaml` | Registry of all registered skills and their capabilities |
| `.github/skills/cli/dak_skill.py` | CLI entry point routing commands to skill actions |
| `.github/skills/common/llm_utils.py` | LiteLLM wrapper with DAK env-var bridging and JSON extraction |
| `.github/skills/common/prompts.py` | Prompt template loader with `{variable}` substitution |
| `.github/skills/common/ig_errors.py` | Validation issue severity constants and formatting |
| `.github/skills/common/fsh_utils.py` | FSH file utilities (ID sanitization, actor path resolution) |
| `.github/skills/common/ig_publisher_iface.py` | Thin wrapper for invoking the FHIR IG Publisher |
| `.github/skills/common/prompts/*.md` | Shared prompt templates (BPMN constraints, XML schema, actor context) |
| `.github/skills/bpmn_author/` | BPMN authoring skill with XML and swimlane validators |
| `.github/skills/bpmn_import/` | BPMN import skill with swimlane→ActorDefinition validator |
| `.github/skills/ig_publisher/` | IG Publisher validation, build, and error interpretation actions |
| `.github/skills/dak_authoring/` | Issue classifier (keyword + LLM) and L2 content authoring |
| `.github/skills/l1_review/`, `l3_review/`, `translation/` | Placeholder skills for future implementation |
| `.github/skills/labels/*.json` | GitHub label definitions for content:L1/L2/L3/translation |
| `.github/workflows/classify-issue.yml` | Workflow to auto-classify issues on open/edit |
| `.github/workflows/skill-l1-review.yml` | Workflow triggered by content:L1 label |
| `.github/workflows/skill-l2-dak.yml` | Workflow triggered by content:L2 label |
| `.github/workflows/skill-l3-review.yml` | Workflow triggered by content:L3 label |
| `.github/workflows/skill-translation.yml` | Workflow triggered by content:translation label |
| `.github/workflows/pr-validate-slash.yml` | `/validate` slash command for PR validation |
| `.env.example` | Local environment template with LLM provider documentation |
| `.gitignore` | Added `.env` to gitignore |
| `input/*/translations/*.pot` | Timestamp-only updates to translation templates |
```python
# Prefer the repo's own runner script if present
runner_script = ig_root_path / "input" / "scripts" / "run_ig_publisher.py"
if runner_script.is_file():
    cmd = [sys.executable, str(runner_script)]
else:
    jar = os.environ.get(
        "PUBLISHER_JAR",
        str(ig_root_path / "input-cache" / "publisher.jar"),
    )
    cmd = ["java", "-jar", jar, "-ig", str(ig_root_path)]

if tx_server:
    cmd.extend(["-tx", tx_server])
if extra_args:
```
When `run_ig_publisher.py` is found as the runner script, the code appends `["-tx", tx_server]` to the command. However, `run_ig_publisher.py` uses argparse with `--tx` (double-dash), not `-tx` (single-dash, as the Java IG Publisher JAR expects). This means `python3 run_ig_publisher.py -tx n/a` will fail with an unrecognized-argument error from argparse.
When building the command for the Python runner, the `-tx` flag should be `--tx` instead. Consider branching the flag format based on which executable is being used.
Applied in c87fd25 — the tx_flag is now `--tx` when using the Python `run_ig_publisher.py` (argparse) and `-tx` for the Java JAR path. Also addressed the other automated review findings in the same commit:
- Added missing `__init__.py` in all `actions/` dirs
- Renamed `prompts.py` → `prompt_loader.py` to avoid collision with the `prompts/` dir
- Fixed `bpmn_author_action.py` to load constraints/schema/actor_context into the prompt template
- Added `pip install litellm requests` to all skill workflows
- Added `author_association` check to `pr-validate-slash.yml`
- Moved `import re` to top, replaced overly generic `"string"` keyword, fixed docstring typo
```python
    # Translation issues
    "mistranslation", "mistranslated", "wrong translation",
    "translation error", "translation review", "translation update",
    "string", "untranslated", "missing translation",
```
The keyword "string" is extremely generic and will cause false-positive content:translation labels on virtually any programming-related issue (e.g., "fix string formatting", "parse JSON string"). Consider removing it or replacing with a more specific phrase like "translation string" or "translatable string". Similarly, other very broad keywords exist across the lists (e.g., "system" in L3_KEYWORDS, "management" in L1_KEYWORDS) that will cause most issues to receive multiple content labels in the keyword fallback path.
Already applied in c87fd25 — replaced `"string"` with `"translatable string"` and removed/tightened other overly generic keywords (`"system"` → `"health information system"`, `"management"` → `"case management"`).
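The word-boundary guard for short keywords mentioned earlier in the thread can be sketched like this (an illustrative sketch, not the exact classifier code):

```python
import re


def keyword_matches(keyword: str, text: str) -> bool:
    """Match a classifier keyword, guarding short keywords with word boundaries.

    Keywords of 3 characters or fewer only match whole words, so the language
    code 'ar' no longer matches inside 'pharmacist'.
    """
    text = text.lower()
    keyword = keyword.lower()
    if len(keyword) <= 3:
        return re.search(rf"\b{re.escape(keyword)}\b", text) is not None
    return keyword in text


print(keyword_matches("ar", "the pharmacist dispenses"))  # False
print(keyword_matches("ar", "translate to ar please"))    # True
```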
```python
_COMMANDS = {
    "validate": "ig_publisher.actions.validate_dak_action",
    "validate-ig": "ig_publisher.actions.validate_ig_action",
    "build-ig": "ig_publisher.actions.build_ig_action",
    "import-bpmn": "bpmn_import.actions.bpmn_import_action",
    "author": "bpmn_author.actions.bpmn_author_action",
    "classify": "dak_authoring.actions.classify_issue_action",
    "interpret-errors": "ig_publisher.actions.interpret_errors_action",
}
```
All actions/ subdirectories are missing __init__.py files (e.g., ig_publisher/actions/, bpmn_author/actions/, bpmn_import/actions/, dak_authoring/actions/). The CLI entry point in dak_skill.py uses importlib.import_module("ig_publisher.actions.validate_dak_action") etc., which requires every intermediate directory to be a Python package (i.e., contain __init__.py). Without these files, all CLI commands will fail with ModuleNotFoundError: No module named 'ig_publisher.actions'.
Add empty __init__.py files to each actions/ subdirectory: ig_publisher/actions/, bpmn_author/actions/, bpmn_import/actions/, and dak_authoring/actions/.
Already applied in c87fd25 — added empty __init__.py to all 7 actions/ subdirectories (bpmn_author, bpmn_import, ig_publisher, dak_authoring, l1_review, l3_review, translation).
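The failure mode is easy to reproduce: `importlib.import_module` resolves each dotted component as a package. A minimal sketch (command table abridged from the `_COMMANDS` dict above; `dispatch` is an illustrative name):

```python
import importlib

# Abridged from the _COMMANDS table above
_COMMANDS = {
    "classify": "dak_authoring.actions.classify_issue_action",
    "validate": "ig_publisher.actions.validate_dak_action",
}


def dispatch(command: str) -> None:
    """Resolve a CLI command to its action module and run it.

    Each dotted component (dak_authoring, dak_authoring.actions, ...) must be
    importable; classic (non-namespace) packages need an __init__.py file.
    """
    module = importlib.import_module(_COMMANDS[command])
    module.main()


try:
    dispatch("classify")
except ModuleNotFoundError as exc:
    # Without the packages resolvable on sys.path this is the reported failure
    print(f"CLI fails: {exc}")
```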
```yaml
- name: Classify and label
  if: steps.dak.outputs.enabled == 'true'
  env:
    GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
    DAK_LLM_API_KEY: ${{ secrets.DAK_LLM_API_KEY }}
    DAK_LLM_MODEL: ${{ vars.DAK_LLM_MODEL || 'gpt-4o-mini' }}
    ISSUE_NUMBER: ${{ github.event.issue.number }}
    ISSUE_TITLE: ${{ github.event.issue.title }}
    ISSUE_BODY: ${{ github.event.issue.body }}
  run: python3 .github/skills/dak_authoring/actions/classify_issue_action.py
```
This workflow (and all other skill workflows like skill-l1-review.yml, skill-l2-dak.yml, skill-l3-review.yml) runs on ubuntu-latest without installing Python dependencies. When DAK_LLM_API_KEY is configured as a repository secret, the action will attempt to import litellm, which isn't available on ubuntu-latest, causing an ImportError. Similarly, classify_issue_action.py needs requests for the apply_labels() call.
Add a step before running the action to install the required dependencies, for example:
`pip install litellm requests` (or `pip install -r` with a requirements file). The keyword-only path (no LLM key) happens to work because litellm is imported lazily, but this breaks as soon as a repo owner configures the `DAK_LLM_API_KEY` secret.
Already applied in c87fd25 — added pip install litellm requests step to classify-issue.yml, skill-l1-review.yml, skill-l2-dak.yml, and skill-l3-review.yml.
```python
prompt = load_prompt(
    "bpmn_author", "create_or_edit_bpmn",
    user_request=f"{issue_title}\n\n{issue_body}",
    current_bpmn="(none — creating new BPMN)",
```
The load_prompt call provides only user_request and current_bpmn, but the create_or_edit_bpmn.md template also contains {dak_bpmn_constraints}, {bpmn_xml_schema}, and {actor_context} placeholders. Because _SafeDict is used (in prompts.py), these unresolved placeholders will appear as literal text (e.g., {dak_bpmn_constraints}) in the prompt sent to the LLM, effectively leaving three important sections empty. The constraints and schema content from common/prompts/dak_bpmn_constraints.md, common/prompts/bpmn_xml_schema.md, and common/prompts/actor_context.md should be loaded and passed as variables to fill these placeholders.
Suggested change:

```python
# Load additional prompt components required by the create_or_edit_bpmn template
dak_bpmn_constraints_path = _SKILLS_ROOT / "common" / "prompts" / "dak_bpmn_constraints.md"
bpmn_xml_schema_path = _SKILLS_ROOT / "common" / "prompts" / "bpmn_xml_schema.md"
actor_context_path = _SKILLS_ROOT / "common" / "prompts" / "actor_context.md"
with dak_bpmn_constraints_path.open(encoding="utf-8") as f:
    dak_bpmn_constraints = f.read()
with bpmn_xml_schema_path.open(encoding="utf-8") as f:
    bpmn_xml_schema = f.read()
with actor_context_path.open(encoding="utf-8") as f:
    actor_context = f.read()
prompt = load_prompt(
    "bpmn_author",
    "create_or_edit_bpmn",
    user_request=f"{issue_title}\n\n{issue_body}",
    current_bpmn="(none — creating new BPMN)",
    dak_bpmn_constraints=dak_bpmn_constraints,
    bpmn_xml_schema=bpmn_xml_schema,
    actor_context=actor_context,
)
```
```python
"""
Prompt loader for DAK skill actions.

Prompts are stored as Markdown files with ``{variable}`` placeholders.
``load_prompt()`` reads the file and substitutes variables using
``str.format_map``.

Usage:
    from common.prompts import load_prompt

    prompt = load_prompt("bpmn_author", "create_or_edit_bpmn",
                         bpmn_xml="<definitions ...>",
                         user_request="Add a pharmacy lane")
"""

import os
from pathlib import Path
from typing import Any


# Root of the skills directory (parent of common/)
_SKILLS_ROOT = Path(__file__).resolve().parent.parent


def load_prompt(skill_name: str, prompt_name: str, **variables: Any) -> str:
    """Load a ``.md`` prompt template and fill ``{variable}`` placeholders.

    The file is resolved as::

        .github/skills/<skill_name>/prompts/<prompt_name>.md

    Falls back to::

        .github/skills/common/prompts/<prompt_name>.md

    Args:
        skill_name: Skill directory name (e.g. ``"bpmn_author"``).
        prompt_name: Prompt file stem (without ``.md``).
        **variables: Substitution values for ``{key}`` placeholders.

    Returns:
        The rendered prompt string.

    Raises:
        FileNotFoundError: If neither skill-specific nor common prompt exists.
    """
    skill_path = _SKILLS_ROOT / skill_name / "prompts" / f"{prompt_name}.md"
    common_path = _SKILLS_ROOT / "common" / "prompts" / f"{prompt_name}.md"

    for path in (skill_path, common_path):
        if path.is_file():
            template = path.read_text(encoding="utf-8")
            return template.format_map(_SafeDict(variables))

    raise FileNotFoundError(
        f"Prompt '{prompt_name}.md' not found in "
        f"'{skill_path}' or '{common_path}'"
    )


class _SafeDict(dict):
    """dict subclass that returns ``{key}`` for missing keys instead of raising KeyError."""

    def __missing__(self, key: str) -> str:
        return "{" + key + "}"
```
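The safe-substitution behavior under discussion is easy to reproduce standalone: `str.format_map` with a `__missing__` hook leaves unknown placeholders as literal text instead of raising `KeyError`. A minimal reproduction (template text is illustrative):

```python
class _SafeDict(dict):
    """Return '{key}' for missing keys instead of raising KeyError."""

    def __missing__(self, key: str) -> str:
        return "{" + key + "}"


template = "Request: {user_request}\nConstraints: {dak_bpmn_constraints}"
rendered = template.format_map(_SafeDict(user_request="Add a pharmacy lane"))
print(rendered)
# Request: Add a pharmacy lane
# Constraints: {dak_bpmn_constraints}
```

This is why unfilled placeholders silently reach the LLM as literal `{...}` text rather than failing loudly.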
Having both common/prompts.py (Python module) and common/prompts/ (directory of .md templates) is a naming collision risk. While it works today because common/prompts/ lacks __init__.py, if anyone ever adds one (e.g., to organize prompt utilities), it would shadow the prompts.py module and break all from common.prompts import load_prompt imports. Consider renaming either the module (e.g., prompt_loader.py) or the directory (e.g., prompt_templates/) to avoid this fragile naming situation.
Already applied in c87fd25 — renamed common/prompts.py → common/prompt_loader.py and updated all imports across the codebase. The common/prompts/ directory of .md templates is now safe from naming collisions.
```python
]


import re
```
import re is placed at line 101, after the keyword list constants. Per PEP 8 and the pattern used in other files in this PR (e.g., bpmn_xml_validator.py, swimlane_validator.py, fsh_utils.py which all import re at the top), standard library imports should be grouped at the top of the file alongside the other imports (os, sys, Path).
Already applied in c87fd25 — moved import re to the top of the file alongside os, sys, Path per PEP 8.
```yaml
# Only run on pull-request comments that start with /validate
if: >
  github.event.issue.pull_request != null &&
  startsWith(github.event.comment.body, '/validate')
```
The comment says "Lets any collaborator post /validate" but the if condition doesn't verify the commenter's permissions. Any GitHub user (including non-collaborators) can post /validate in a PR comment and trigger a ghbuild.yml workflow dispatch, consuming Actions minutes. Consider adding an author_association check such as contains(fromJson('["OWNER","MEMBER","COLLABORATOR"]'), github.event.comment.author_association) to restrict the command to trusted users. The existing pr-deploy-slash.yml has the same gap, but since this is a new workflow and the PR description claims to "Fix critical security issue", this should be addressed here.
Already applied in c87fd25 — added contains(fromJson('["OWNER","MEMBER","COLLABORATOR"]'), github.event.comment.author_association) to the if condition in pr-validate-slash.yml.
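The effect of that guard, modeled in Python for illustration (the real check is a GitHub Actions expression, not repository code):

```python
# Trusted roles, mirroring fromJson('["OWNER","MEMBER","COLLABORATOR"]')
TRUSTED_ASSOCIATIONS = {"OWNER", "MEMBER", "COLLABORATOR"}


def is_trusted(author_association: str) -> bool:
    """Return True when the commenter may trigger /validate."""
    return author_association in TRUSTED_ASSOCIATIONS


print(is_trusted("COLLABORATOR"))  # True
print(is_trusted("NONE"))          # False (drive-by commenters can't burn Actions minutes)
```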
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
❌ Deployment Failed
Branch:
Deployment failed. Please check the build log for details.
…ce.py Co-authored-by: litlfred <662242+litlfred@users.noreply.github.com>
…loader rename, workflow deps, security, keyword and docstring fixes Co-authored-by: litlfred <662242+litlfred@users.noreply.github.com>
Propose a solution to resolve the conflicts with upstream and merge @copilot |
❌ Deployment Failed
Branch:
Deployment failed. Please check the build log for details.
Co-authored-by: litlfred <662242+litlfred@users.noreply.github.com>
Resolved in |
❌ Deployment Failed
Branch:
Deployment failed. Please check the build log for details.
Implements the skill library infrastructure under `.github/skills/` for AI-assisted and structural validation of WHO Digital Adaptation Kit content, with BPMN authoring as the first skill category.

**Common module (`common/`)**
- `llm_utils.py` — thin utility functions around LiteLLM (20k+ GitHub stars, MIT): `dak_completion()`, `parse_json_response()`, `is_llm_available()`, `get_llm_config()`. No custom LLM facade to maintain — LiteLLM is the trusted external library providing multi-provider support (OpenAI, Anthropic, Google, etc.)
- `prompt_loader.py` — `.md` template loader with `{variable}` substitution and safe missing-key handling (renamed from `prompts.py` to avoid a naming collision with the `prompts/` template directory)
- `ig_errors.py` — FATAL/ERROR/WARNING/INFORMATION severity levels matching the IG Publisher format
- `ig_publisher_iface.py` — IG Publisher wrapper that correctly uses `--tx` for the Python runner (argparse) and `-tx` for the Java JAR
- `fsh_utils.py` — FSH ID sanitization, ActorDefinition file lookup

**BPMN skills**
- `bpmn_author/` — XML validator (well-formedness, no Zeebe/Camunda namespaces, no duplicate IDs) + swimlane validator (lane presence, FSH-safe IDs, orphan flow node detection). The BPMN authoring action loads DAK constraints, BPMN schema, and actor context into the prompt template.
- `bpmn_import/` — validates that every innermost `<lane id="X">` maps to `input/fsh/actors/ActorDefinition-DAK.X.fsh`

**Issue classification (`dak_authoring/`)**
- Keyword classifier with word-boundary matching for short keywords (≤3 chars), preventing false positives like `"ar"` matching `"pharmacist"`
- LLM-based classification when `DAK_LLM_API_KEY` is set
- The L2 content review skill is `dak_authoring/actions/dak_authoring_action.py`, triggered by the `content:L2` label via `skill-l2-dak.yml`

**GitHub Actions workflows**
- `classify-issue.yml` — auto-labels on issue open/edit, gated on `dak.json` presence; installs `litellm` and `requests` dependencies
- `skill-l{1,2,3}-review.yml`, `skill-translation.yml` — label-triggered skill dispatch with dependency installation
- `pr-validate-slash.yml` — `/validate` slash command using the safe dispatch pattern with an `author_association` check restricting to OWNER/MEMBER/COLLABORATOR (no untrusted code execution in a privileged context)

**Infrastructure**
- `Dockerfile` (FROM `hl7fhir/ig-publisher-base`) + `docker-compose.yml` with service aliases
- `cli/dak_skill.py` — CLI entry point: `validate`, `validate-ig`, `build-ig`, `import-bpmn`, `author`, `classify`, `interpret-errors`
- `__init__.py` files in all `actions/` subdirectories for proper Python package resolution
- `.env.example` with per-provider key formats, popular model names, and a link to the LiteLLM master model list
- `.gitignore` update for `.env`
- `.github/skills/labels/` for `content:L1`, `content:L2`, `content:L3`, `content:translation`

**Security model**
- API keys come from the repo secret (`DAK_LLM_API_KEY`) or a local `.env` only — never in UI/logs
- The `/validate` slash command is restricted to repository collaborators via the `author_association` check

**Upstream sync**
- Merged `.pot` translation templates and translation infrastructure from `main` (new files: `changes.pot`, `dak-api.pot`, `downloads.pot`, `license.pot`, `scripts.pot`, `base.pot`; rename: `pages.pot` → `index.pot`)
- Merged `run_ig_publisher.py` changes from `main` (translation config moved to `sushi-config.yaml`, FHIR instance translation extraction fixes, `base.pot` source context augmentation)
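The lane-to-file convention used by the BPMN import validation (bare lane `@id` → `ActorDefinition-DAK.{id}.fsh`) can be sketched as a small helper (hypothetical name, not a function in the repo):

```python
from pathlib import Path


def actor_fsh_path(lane_id: str, ig_root: Path = Path(".")) -> Path:
    """Map a bare BPMN lane @id to the ActorDefinition FSH file that
    bpmn2fhirfsh.xsl would generate (helper name is illustrative)."""
    return ig_root / "input" / "fsh" / "actors" / f"ActorDefinition-DAK.{lane_id}.fsh"


print(actor_fsh_path("pharmacist").as_posix())
# input/fsh/actors/ActorDefinition-DAK.pharmacist.fsh
```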
This section details the original issue you should resolve
<issue_title>create skills infrstructure including bpmn</issue_title>
<issue_description># DAK AI Skill Library — Requirements
Status: DRAFT v0.8 (2026-03-05)
Repo: WorldHealthOrganization/smart-base
Inspiration: jtlicardo/bpmn-assistant (MIT) — `LLMFacade` copy-lifted with gratitude and attribution

BPMN authoring is one skill within SMART Guidelines and IG authoring.
IG authoring is one category of skills. There are more (e.g. decision support execution, FHIR resource authoring) that will come in the future.
1. Goals
- Extract `ActorDefinition`s from `input/business-processes/` via `bpmn_extractor.py` + `bpmn2fhirfsh.xsl`; validate FSH output
- Skills run as GitHub Actions (repo-owner secret) or locally via Docker (user's `.env`)

2. Key Conventions
2.1 BPMN Storage Path
Confirmed from `bpmn_extractor.py`: `glob.glob("input/business-processes/*bpmn")`.
SVG files from the same directory are handled by `svg_extractor.py`.

2.2 Lane ID → ActorDefinition Mapping
The lane `@id` is the bare FSH instance ID — no `DAK.` prefix on the lane itself. `bpmn2fhirfsh.xsl` generates the file as `ActorDefinition-DAK.{@id}.fsh` and sets `* id = "DAK.{@id}"` inside the FSH, but the lane `@id` in BPMN = the bare instance name.

3. Security Model for API Keys
3.1 Two Execution Contexts
- CI: `${{ secrets.DAK_LLM_API_KEY }}` (repo owner sets once)
- Local: `.env` file (gitignored)

3.2 Graceful Degradation
3.3 Repo Secret Setup (one-time, by repo owner)
4. Environment Parity: Local = CI
4.1 CI Environment (authoritative)
From `ghbuild.yml` — the IG build uses `hl7fhir/ig-publisher-base:latest` with these additions:

4.2 Local Docker Image